
    Why did I fail? A Causal-based Method to Find Explanations for Robot Failures

    Robot failures in human-centered environments are inevitable. Therefore, the ability of robots to explain such failures is paramount for interacting with humans and for increasing trust and transparency. To achieve this skill, the main challenges addressed in this paper are I) acquiring enough data to learn a cause-effect model of the environment and II) generating causal explanations based on that model. We address I) by learning a causal Bayesian network from simulation data. Concerning II), we propose a novel method that enables robots to generate contrastive explanations upon task failures. The explanation is based on setting the failure state in contrast with the closest state that would have allowed for successful execution, which is found through breadth-first search and is based on success predictions from the learned causal model. We assess the sim2real transferability of the causal model on a cube-stacking scenario. Based on real-world experiments with two differently embodied robots, we achieve a sim2real accuracy of 70% without any adaptation or retraining. Our method thus allowed real robots to give failure explanations like 'the upper cube was dropped too high and too far to the right of the lower cube.' Comment: submitted to IEEE Robotics and Automation Letters (February 2022).
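    The abstract above describes learning a causal Bayesian network from simulation data (challenge I). As a rough, library-free illustration of that step, the sketch below estimates the conditional probability table of a success node from simulated stacking outcomes; the variable names, bins, and data are invented for illustration and are not the authors' model.

```python
# Minimal, library-free sketch (not the authors' code): estimate the
# conditional probability table P(success | x_offset, drop_height) for the
# success node of a causal Bayesian network from simulated stacking outcomes.
# Variable names and the discretization are illustrative assumptions.
from collections import defaultdict

# Hypothetical simulation log: (x_offset_bin, drop_height_bin, success).
simulated_trials = [
    ("small", "low", True), ("small", "low", True), ("small", "high", False),
    ("large", "low", False), ("large", "high", False), ("small", "high", True),
]

counts = defaultdict(lambda: [0, 0])      # parent config -> [failures, successes]
for x_bin, h_bin, success in simulated_trials:
    counts[(x_bin, h_bin)][int(success)] += 1

def p_success(x_bin, h_bin):
    """Maximum-likelihood estimate of P(success | parents) with add-one smoothing."""
    fail, succ = counts[(x_bin, h_bin)]
    return (succ + 1) / (fail + succ + 2)

# Predict success for a candidate state before executing the action.
print(p_success("small", "low"))   # high: both simulated trials succeeded
print(p_success("large", "high"))  # low: the observed trial failed
```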

    Why Did I Fail? A Causal-Based Method to Find Explanations for Robot Failures

    Robot failures in human-centered environments are inevitable. Therefore, the ability of robots to explain such failures is paramount for interacting with humans and for increasing trust and transparency. To achieve this skill, the main challenges addressed in this paper are I) acquiring enough data to learn a cause-effect model of the environment and II) generating causal explanations based on the obtained model. We address I) by learning a causal Bayesian network from simulation data. Concerning II), we propose a novel method that enables robots to generate contrastive explanations upon task failures. The explanation is based on setting the failure state in contrast with the closest state that would have allowed for successful execution. This state is found through breadth-first search and is based on success predictions from the learned causal model. We assessed our method in two different scenarios: I) stacking cubes and II) dropping spheres into a container. The obtained causal models reach a sim2real accuracy of 70% and 72%, respectively. We finally show that our novel method scales over multiple tasks and allows real robots to give failure explanations like “the upper cube was stacked too high and too far to the right of the lower cube.”
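    Challenge II), finding the closest state that would have allowed success, can be sketched generically. The snippet below runs a contrastive breadth-first search over a small discretized state space, with a stand-in predicate in place of the learned causal model's success prediction; the state variables, bins, and thresholds are illustrative assumptions rather than the paper's.

```python
# Illustrative sketch of a contrastive breadth-first search: starting from a
# discretized failure state, find the closest state the (stand-in) causal
# model predicts to succeed, then phrase the difference as an explanation.
from collections import deque

# Discretized bins per variable (illustrative assumption).
BINS = {"x_offset": [-2, -1, 0, 1, 2], "drop_height": [0, 1, 2, 3]}

def predicts_success(state):
    """Stand-in for the success prediction of the learned causal model."""
    return abs(state["x_offset"]) <= 1 and state["drop_height"] <= 1

def neighbours(state):
    """States that differ by one bin in exactly one variable."""
    for var, bins in BINS.items():
        idx = bins.index(state[var])
        for step in (-1, 1):
            if 0 <= idx + step < len(bins):
                yield {**state, var: bins[idx + step]}

def closest_success_state(failure_state):
    queue, seen = deque([failure_state]), {tuple(failure_state.values())}
    while queue:
        state = queue.popleft()
        if predicts_success(state):
            return state
        for nxt in neighbours(state):
            key = tuple(nxt.values())
            if key not in seen:
                seen.add(key)
                queue.append(nxt)
    return None

failure = {"x_offset": 2, "drop_height": 3}
success = closest_success_state(failure)
diff = {v: (failure[v], success[v]) for v in failure if failure[v] != success[v]}
print("Change", diff, "to succeed")  # material for a contrastive explanation
```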

    A causal-based approach to explain, predict and prevent failures in robotic tasks

    Robots working in human environments need to adapt to unexpected changes to avoid failures. This is an open and complex challenge that requires robots to timely predict and identify the causes of failures in order to prevent them. In this paper, we present a causal-based method that enables robots to predict when errors are likely to occur and to prevent them from happening by executing a corrective action. Our proposed method is able to predict immediate failures as well as failures that will occur in the future. The latter type of failure is very challenging, and we call such failures timely-shifted action failures (e.g., the current action was successful but will negatively affect the success of future actions). First, our method detects the cause–effect relationships between task executions and their consequences by learning a causal Bayesian network (BN). The model learned from simulated data is transferred to real scenarios to demonstrate its robustness and generalization. Based on the causal BN, the robot can predict if and why the executed action will succeed or not in its current state. Then, we introduce a novel method that finds the closest success state through a contrastive breadth-first search if the current action is predicted to fail. We evaluate our approach on the problem of stacking cubes in two cases: (a) single stacks (stacking one cube) and (b) multiple stacks (stacking three cubes). In the single-stack case, our method was able to reduce the error rate by 97%. We also show that our approach can scale to capture various actions in one model, allowing us to measure the impact of an imprecise stack of the first cube on the stacking success of the third cube. For these complex situations, our model was able to prevent around 95% of the stacking errors. This demonstrates that our method is able to explain, predict, and prevent execution failures, and that it even scales to complex scenarios that require an understanding of how the action history impacts future actions.
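    To make the predict-and-prevent loop concrete, the following sketch checks a success prediction before each action and triggers a corrective action when a failure is predicted; the predictor, the corrective routine, and the robot interface are hypothetical placeholders, not the implementation evaluated in the paper.

```python
# Hedged sketch of a predict-and-prevent loop: before executing each action,
# query a success predictor (standing in for the learned causal BN); if
# failure is predicted, steer toward the closest predicted-success state
# first. All names and thresholds below are hypothetical.

def predict_success(state):
    """Placeholder for the success prediction of the causal Bayesian network."""
    return state["x_offset_cm"] < 1.0 and state["drop_height_cm"] < 2.0

def closest_success_state(state):
    """Placeholder for the contrastive breadth-first search over states."""
    return {"x_offset_cm": 0.5, "drop_height_cm": 1.0}

def execute_with_prevention(robot, plan):
    for action in plan:
        state = robot.measure_state(action)
        if not predict_success(state):
            # Predicted failure: execute a corrective action toward the
            # closest state the model predicts to succeed.
            robot.correct(action, closest_success_state(state))
        robot.execute(action)

class DummyRobot:
    """Minimal stand-in so the sketch runs without hardware."""
    def measure_state(self, action):
        return {"x_offset_cm": 1.5, "drop_height_cm": 2.5}
    def correct(self, action, target):
        print(f"corrective action before '{action}': move to {target}")
    def execute(self, action):
        print(f"executing '{action}'")

execute_with_prevention(DummyRobot(), ["stack_cube_1", "stack_cube_2", "stack_cube_3"])
```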

    Hierarchical Reinforcement Learning based on Planning Operators

    Long-horizon manipulation tasks such as stacking represent a longstanding challenge in the field of robotic manipulation, particularly when using reinforcement learning (RL) methods, which often struggle to learn the correct sequence of actions for achieving these complex goals. Symbolic planning methods offer a good solution to this sequencing problem based on high-level reasoning; however, planners often fall short in addressing the low-level control specificity needed for precise execution. This paper introduces a novel framework that integrates symbolic planning with hierarchical RL through the cooperation of high-level operators and low-level policies. Our contribution integrates planning operators (i.e., their preconditions and effects) as part of a hierarchical RL algorithm based on the Scheduled Auxiliary Control (SAC-X) method. We developed dual-purpose high-level operators, which can be used both in holistic planning and as independent, reusable policies. Our approach offers a flexible solution for long-horizon tasks, e.g., stacking a cube. The experimental results show that our proposed method achieved an average success rate of 97.2% for learning and executing the whole stacking sequence, along with high success rates for the independently learned policies, e.g., reach (98.9%), lift (99.7%), and stack (85%). The training time is also reduced by 68% when using our proposed approach.
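    The idea of a dual-purpose operator can be illustrated with a small sketch: each operator carries symbolic preconditions and effects and also points to a low-level policy that can be scheduled or reused on its own. The operator definitions and the toy forward-chaining planner below are assumptions for illustration, not the SAC-X-based method of the paper.

```python
# Illustrative sketch: planning operators (preconditions/effects) that double
# as schedulable low-level policies. Names, predicates, and the toy planner
# are assumptions, not the paper's implementation.
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Operator:
    name: str
    preconditions: Set[str]
    effects: Set[str]
    policy: Callable[[], None]          # low-level policy that executes the skill

def plan(operators, state, goal):
    """Toy forward chaining: apply any operator whose preconditions hold."""
    sequence, state = [], set(state)
    while not goal <= state:
        applicable = [op for op in operators
                      if op.preconditions <= state and not op.effects <= state]
        if not applicable:
            return None                  # no operator advances toward the goal
        op = applicable[0]
        sequence.append(op)
        state |= op.effects
    return sequence

reach = Operator("reach", {"cube_seen"}, {"gripper_at_cube"},
                 lambda: print("run reach policy"))
lift = Operator("lift", {"gripper_at_cube"}, {"cube_lifted"},
                lambda: print("run lift policy"))
stack = Operator("stack", {"cube_lifted"}, {"cube_stacked"},
                 lambda: print("run stack policy"))

for op in plan([reach, lift, stack], {"cube_seen"}, {"cube_stacked"}):
    op.policy()                          # each operator is also a reusable policy
```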

    Work in Progress - Automated Generation of Robotic Planning Domains from Observations

    In this paper, we report the results of our latest work on the automated generation of planning operators from human demonstrations, and we present some of our future research ideas. To automatically generate planning operators, our system segments and recognizes the different observed actions from human demonstrations. We then propose an automatic extraction method to detect the relevant preconditions and effects from these demonstrations. Finally, our system generates the associated planning operators and finds a sequence of actions that satisfies a user-defined goal using a symbolic planner. The plan is deployed on a simulated TIAGo robot. Our future research directions include learning from and explaining execution failures and detecting cause-effect relationships between demonstrated hand activities and their consequences on the robot's environment. The former is crucial for trust-based and efficient human-robot collaboration, and the latter for learning in realistic and dynamic environments.
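    As an illustration of how preconditions and effects might be extracted, the sketch below compares the symbolic state observed just before and just after each occurrence of a recognized action across demonstrations and keeps the predicates that hold consistently; the predicate names and data layout are invented, not the paper's representation.

```python
# Hedged sketch: derive an operator's preconditions and effects from the
# symbolic states observed before/after each occurrence of an action.
# Predicate names and the data layout are illustrative assumptions.

def extract_operator(occurrences):
    """occurrences: list of (state_before, state_after) sets of predicates."""
    # Preconditions: predicates that held before the action in every occurrence.
    preconditions = set.intersection(*(before for before, _ in occurrences))
    # Positive effects: predicates consistently added by the action.
    add_effects = set.intersection(*(after - before for before, after in occurrences))
    # Negative effects: predicates consistently removed by the action.
    del_effects = set.intersection(*(before - after for before, after in occurrences))
    return preconditions, add_effects, del_effects

# Two hypothetical demonstrations of a "pick(cube)" segment.
demos = [
    ({"hand_empty", "cube_on_table"}, {"cube_in_hand"}),
    ({"hand_empty", "cube_on_table", "light_on"}, {"cube_in_hand", "light_on"}),
]
pre, add, delete = extract_operator(demos)
print("pre:", pre, "add:", add, "del:", delete)
# pre: hand_empty, cube_on_table  add: cube_in_hand  del: hand_empty, cube_on_table
```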

    The Value of Diversity in the Robotics and Automation Society [Women in Engineering]

    In 2022, a new team was formed to lead the Women in Engineering (WiE) Committee of the IEEE Robotics and Automation Society (RAS). The new team is led by Karinne Ramirez-Amaro from Chalmers University of Technology as the new chair, with the fantastic support of two co-chairs: Daniel Leidner from the German Aerospace Center (DLR) and Georgia Chalvatzaki from the Technical University of Darmstadt. Together, we are committed to encouraging diversity within the Society and to making significant advances toward an inclusive and equitable culture.

    Automatic segmentation and recognition of human activities from observation based on semantic reasoning

    Automatically segmenting and recognizing human activities from observations typically requires a very complex and sophisticated perception algorithm. Such systems are unlikely to be implemented on-line on a physical system, such as a robot, due to the pre-processing steps that those vision systems usually demand. In this work, we demonstrate that an appropriate semantic representation of the activity, without such complex perception systems, is sufficient to infer human activities from videos. First, we present a method to extract semantic rules based on three simple hand motions, i.e., move, not move, and tool use. Additionally, information about the object properties ObjectActedOn and ObjectInHand is used. These properties encapsulate the information of the current context. The above data is used to train a decision tree to obtain the semantic rules employed by a reasoning engine. In other words, we extract low-level information from videos and reason about the intended (high-level) human behaviors. The advantage of this abstract representation is that it yields more generic models of human behaviors, even when the information is obtained from different scenarios. The results show that our system correctly segments and recognizes human behaviors with an accuracy of 85%. Another important aspect of our system is its scalability and adaptability toward new activities, which can be learned on demand. Our system has been fully implemented on a humanoid robot, the iCub, to experimentally validate its performance and robustness during on-line execution.
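    The decision-tree step described above can be sketched with scikit-learn: hand-motion and object-property features are encoded and a tree is trained to predict the activity label. The example rows and label names below are invented for illustration and do not come from the paper's dataset.

```python
# Minimal sketch (invented example data): train a decision tree on hand-motion
# and object-property features to infer the high-level activity, mirroring the
# semantic-rule extraction step described in the abstract.
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: hand motion (move / not_move / tool_use),
#          ObjectActedOn, ObjectInHand ("none" if empty).
X_raw = [
    ["move",     "bread", "none"],
    ["not_move", "none",  "none"],
    ["tool_use", "bread", "knife"],
    ["move",     "none",  "bread"],
]
y = ["reach", "idle", "cut", "take"]

encoder = OrdinalEncoder()
X = encoder.fit_transform(X_raw)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(export_text(tree, feature_names=["motion", "acted_on", "in_hand"]))

# Classify a newly observed segment.
segment = encoder.transform([["tool_use", "bread", "knife"]])
print(tree.predict(segment))  # e.g. ['cut']
```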

    A Survey on Semantics and Understanding of Human Activities

    This paper presents semantic-based methods for the understanding of human movements in robotic applications. To understand human movements, robots need to, first, recognize the observed or demonstrated human activities and, second, learn the parameters needed to execute an action or robot behavior. To achieve this, several challenges need to be addressed, such as the automatic segmentation of human activities, the identification of important features of actions, the determination of the correct sequencing between activities, and the correct mapping between continuous data and the symbolic and semantic interpretations of the human movements. This paper presents state-of-the-art semantic-based approaches, especially the newly emerging approaches that tackle the challenge of finding generic and compact semantic models for the robotics domain. Finally, we highlight potential breakthroughs and challenges for the coming years, such as achieving scalability, better generalization, compact and flexible models, and higher system accuracy.

    Automated Generation of Robotic Planning Domains from Observations

    Automated planning enables robots to find plans to achieve complex, long-horizon tasks, given a planning domain. This planning domain consists of a list of actions with their associated preconditions and effects, and it is usually manually defined by a human expert, which is very time-consuming or even infeasible. In this paper, we introduce a novel method for generating this domain automatically from human demonstrations. First, we automatically segment and recognize the different observed actions from human demonstrations. From these demonstrations, the relevant preconditions and effects are obtained, and the associated planning operators are generated. Finally, a sequence of actions that satisfies a user-defined goal can be planned using a symbolic planner. The generated plan is executed in a simulated environment by the TIAGo robot. We tested our method on a dataset of 12 demonstrations collected from three different participants. The results show that our method is able to generate executable plans from a single demonstration with a 92% success rate, and with 100% success when the information from all demonstrations is included, even for previously unknown stacking goals.
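    To make the generated planning domain tangible, the sketch below renders an extracted operator as a simplified PDDL-style action that a symbolic planner could consume; the action and predicate names are hypothetical, and the format omits the parameters and typing that a full domain would include.

```python
# Hedged sketch: render an extracted operator as a simplified PDDL-style
# action so it could be handed to an off-the-shelf symbolic planner.
# Predicate and action names are hypothetical examples.

def to_pddl_action(name, preconditions, add_effects, del_effects):
    pre = " ".join(f"({p})" for p in sorted(preconditions))
    eff = " ".join(f"({e})" for e in sorted(add_effects))
    eff += " " + " ".join(f"(not ({e}))" for e in sorted(del_effects))
    return (f"(:action {name}\n"
            f"  :precondition (and {pre})\n"
            f"  :effect (and {eff.strip()}))")

print(to_pddl_action(
    "pick_cube",
    preconditions={"hand_empty", "cube_on_table"},
    add_effects={"cube_in_hand"},
    del_effects={"hand_empty", "cube_on_table"},
))
```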